Okay, the main leftover question is: how do we deal with amnesia, the fact that local search remembers nothing about where it has been?
How can we make sure that we have a good chance of actually hitting the global maximum, of getting out of ridges and shoulders and so on? There is a technique which works very well in practice, called simulated annealing.
And the idea there is that, depending on your space, you very often have what is sometimes called a ridge: a series of ascending local maxima which you really want to follow one after the other. When you are sitting on one of these local maxima, you want to jump out of it, land somewhere further along the ridge, and move up the ridge towards a better world. Whether that leads to the global maximum we don't know, because there could be lots and lots of ridges.
Whenever I draw something, then on a good day I can draw a three-dimensional space.
Typically, for interesting problems, you have a thousand-dimensional space, which is kind of big for my head, so I'm not going to draw it. But you can imagine: the more dimensions you have, the more chances you have for ridges and plateaus and all those kinds of things. So in these high-dimensional spaces things get really, really complicated, and the chances of getting stuck on some ridge in dimension 743 are huge.
Fortunately, there's something you can do, and the idea comes from heating steel. If you make steel, apparently it's very important to have small, roundish crystals in it; the smaller and rounder these crystals are, the better the steel. There are a couple of ways to get them. You can heat the steel and bang it with a hammer, which is what they did in medieval times: they heated the steel for a sword twenty times and banged it with a hammer a couple of thousand times, and that made good, small crystals. That's not efficient enough nowadays. So instead you heat the steel and let it cool extremely gradually, so that the crystals have a chance to reorganize into those wonderful small crystals. That's where we get our steel, and that process is called annealing.
And this idea of gradually cooling something down is something you can apply to local search as well. I would like to give you a mental picture of this.
Take this picture here: a two-dimensional manifold sitting in three dimensions, with hills and valleys and ridges and all those kinds of things. Flip it upside down, so that you are now looking for local minima, and drop a ping pong ball into it. A ping pong ball is a wonderful search tool for local minima: you put it somewhere at random, let it drop, and it has found one, however deep that happens to be. Now it's sitting in a local minimum. What do you do? Well, we randomly restart: you take your manifold landscape, shake it, and the ball lands somewhere else and finds another local minimum. You can keep doing that until you're blue in the face.
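As a minimal sketch of this random-restart idea, assume three hypothetical callbacks that your problem would have to provide: sample_state() draws a random starting state, neighbors(s) enumerates the states one jump away, and value(s) scores a state. It is written as maximization, so flip the signs for the upside-down minima picture:

```python
def hill_climb(initial, neighbors, value):
    """Greedy ascent: move to the best neighbor until none improves."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current              # stuck on a local maximum
        current = best

def random_restart(sample_state, neighbors, value, restarts=100):
    """'Shake the landscape': drop the ball at random many times,
    keep the best local maximum found."""
    best = None
    for _ in range(restarts):
        result = hill_climb(sample_state(), neighbors, value)
        if best is None or value(result) > value(best):
            best = result
    return best
```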
Now, is there a way to make sure that we have a very good chance of ending up in a good minimum? This is where the annealing idea comes in: if we are in a good minimum, then we shouldn't shake the ball out of it. So the idea is to start randomly, shake heavily at first so that you get out of the initial bad minima, and then gradually shake less and less. That way, if you are on a ridge leading somewhere, you can still get into the next minimum, or the one after that, each one better than the last. Eventually you cool down until you have settled on a minimum that you are then stuck with.
Now, the surprising thing is that this actually works: just as it works with steel, it works with local search as well. You can even make this precise. Think of shaking less and less as taking shorter and shorter random jumps, whatever that means in your high-dimensional space. So you have essentially the old algorithm, but with this cooling schedule threaded in. You can actually make this property of finding good local minima precise, using statistics in ways we won't go into in this course. The result basically says that you do find good local minima, and that is what makes simulated annealing a very, very successful local search procedure.
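Here is a minimal sketch of that cooled-down version, with the same hypothetical callbacks as above plus a neighbor(s) that returns one random jump from s. The temperature schedule (t_start, t_end, cooling) and the Metropolis-style acceptance rule are assumptions matching a standard textbook formulation, not necessarily the exact procedure shown in the lecture:

```python
import math
import random

def simulated_annealing(initial, neighbor, value,
                        t_start=10.0, t_end=1e-3, cooling=0.995):
    """Shake hard at first, then less and less: a worse move is accepted
    with probability exp(delta / T), which shrinks as T cools."""
    current = initial
    t = t_start
    while t > t_end:
        candidate = neighbor(current)              # one random jump
        delta = value(candidate) - value(current)  # > 0 means uphill
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate    # always accept uphill, sometimes downhill
        t *= cooling               # cool down: shake less and less
    return current
```

Early on, while t is large, almost any jump is accepted, which is the heavy shaking; near the end, only improvements get through, and the ball settles.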
And that's at the core of very many search-based applications: the traveling salesman problem, configuration problems, n-queens of course, and so on. It's a relatively simple idea, but it works quite well, and we don't really know anything better yet. Why? Why don't we know anything better yet?
Okay, I'll try a couple of answers. One is that we're doing local search, so we shouldn't use a lot of memory, but to be clever, you need memory. And in these high-dimensional situations, any real amount of memory is typically too much. Say we want to be more clever and run A*. Unfortunately, we don't have enough memory for that. So I cannot do anything clever without memory, essentially. And with a branching factor of a thousand, the search trees explode exponentially. No fun. So you have to do something as general as this. That's mostly it: we don't have enough memory.
Okay, but there are a couple of things we can do. We can take this general setup and do something else with it. Why not use k ping pong balls instead of one? Maybe k states is an amount we can afford to remember.
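That is the idea behind local beam search. A minimal sketch with the same hypothetical callbacks as before, keeping k states in memory instead of one:

```python
import heapq

def local_beam_search(sample_state, neighbors, value, k=10, steps=100):
    """k 'ping pong balls' at once: pool the neighbors of all k states
    and keep only the k best, so promising regions attract the whole beam."""
    beam = [sample_state() for _ in range(k)]
    for _ in range(steps):
        pool = beam + [n for s in beam for n in neighbors(s)]
        beam = heapq.nlargest(k, pool, key=value)
    return max(beam, key=value)
```

Note that this is different from running k independent restarts: the k states share one candidate pool, so balls in unpromising regions get dropped in favor of extra balls in promising ones.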
Simulated Annealing, Local beam search, Genetic algorithms and stories about RoboCup.